Unfamiliar environment


The Download: training robots for unfamiliar environments, and all-new bird flu

MIT Technology Review

What's new: It's tricky to get robots to do things in environments they've never seen before. Typically, researchers need to train them on new data for every new place they encounter, which can become very time-consuming and expensive. Now, researchers have developed a series of AI models that teach robots to complete basic tasks in new surroundings without further training or fine-tuning.

What they achieved: The five AI models, called robot utility models (RUMs), allow machines to complete five separate tasks in unfamiliar environments with a 90% success rate: opening doors, opening drawers, and picking up tissues, bags, and cylindrical objects.

The big picture: The team hopes its findings will make it quicker and easier to teach robots new skills while helping them function within previously unseen domains.


AI models let robots carry out tasks in unfamiliar environments

MIT Technology Review

The team, consisting of researchers from New York University, Meta, and the robotics company Hello Robot, hopes its findings will make it quicker and easier to teach robots new skills while helping them function within previously unseen domains. The approach could make it easier and cheaper to deploy robots in our homes. "In the past, people have focused a lot on the problem of 'How do we get robots to do everything?' but not really asking 'How do we get robots to do the things that they do know how to do--everywhere?'" says Mahi Shafiullah, a PhD student at New York University who worked on the project. "We looked at 'How do you teach a robot to, say, open any door, anywhere?'" Teaching robots new skills generally requires a lot of data, which is hard to come by.


Exploring Unseen Environments with Robots using Large Language and Vision Models through a Procedurally Generated 3D Scene Representation

S, Arjun P, Melnik, Andrew, Nandi, Gora Chand

arXiv.org Artificial Intelligence

Recent advancements in Generative Artificial Intelligence, particularly in the realm of Large Language Models (LLMs) and Large Vision Language Models (LVLMs), have enabled the prospect of leveraging cognitive planners within robotic systems. This work focuses on solving the object goal navigation problem by mimicking human cognition to attend, perceive, and store task-specific information, and to generate plans from it. We introduce a comprehensive framework capable of exploring an unfamiliar environment in search of an object by leveraging the capabilities of Large Language Models (LLMs) and Large Vision Language Models (LVLMs) in understanding the underlying semantics of our world. A challenging aspect of using LLMs to generate high-level sub-goals is efficiently representing the environment around the robot. We propose to use a modular 3D scene representation, with semantically rich descriptions of the objects, to provide the LLM with task-relevant information. But providing the LLM with a mass of contextual information (the rich 3D scene semantic representation) can lead to redundant and inefficient plans. We propose to use an LLM-based pruner that leverages in-context learning to prune out information irrelevant to the goal.
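The pruning idea described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `build_pruning_prompt` shows how in-context examples might frame the pruning task for an LLM, and `mock_prune` is a trivial keyword-matching stand-in for the actual LLM call, included only so the sketch runs end to end.

```python
# Hypothetical sketch of LLM-based, goal-specific scene pruning.
# The scene is a list of semantically rich object descriptions; the
# prompt uses few-shot (in-context) examples to show the pruning task.

def build_pruning_prompt(scene_objects, goal, examples):
    """Compose a few-shot prompt asking an LLM to keep only
    goal-relevant entries from the scene representation."""
    lines = ["Keep only the objects relevant to reaching the goal."]
    for ex_goal, ex_scene, ex_kept in examples:  # in-context examples
        lines.append(f"Goal: {ex_goal}")
        lines.append(f"Scene: {', '.join(ex_scene)}")
        lines.append(f"Relevant: {', '.join(ex_kept)}")
    lines.append(f"Goal: {goal}")
    lines.append(f"Scene: {', '.join(scene_objects)}")
    lines.append("Relevant:")
    return "\n".join(lines)

def mock_prune(scene_objects, goal):
    """Keyword-matching stand-in for the LLM pruner (illustration only)."""
    return [o for o in scene_objects if any(w in o for w in goal.split())]

scene = ["red mug on kitchen counter", "sofa in living room", "mug rack near sink"]
kept = mock_prune(scene, "find the mug")
prompt = build_pruning_prompt(scene, "find the mug",
                              [("open the door", ["door handle", "window"], ["door handle"])])
```

In a real system the pruned scene, not the full representation, would be passed to the planner LLM, keeping the context short and the sub-goals focused.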


Robots learn to get back up after a fall in an unfamiliar environment

New Scientist - News

Robots can pick themselves up after a fall, even in an unfamiliar environment, thanks to an artificially intelligent controller that can adapt to new scenarios. It could make four-legged robots more useful in responding to natural disasters, such as earthquakes. Zhibin (Alex) Li at the University of Edinburgh, UK, and his colleagues used an AI technique called deep reinforcement learning to teach four-legged robots a set of basic skills, such as trotting, steering, and fall recovery. This involves the robots experimenting with different ways of moving and being rewarded with a numerical score for achieving a certain goal, such as standing up after a fall, and penalised for failing. This lets the AI recognise which actions are desired and repeat them in similar situations in the future.
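The reward-and-penalty scheme described above can be sketched as a simple shaped reward function. This is a minimal illustration of the general reinforcement-learning idea, not the Edinburgh group's actual reward; the quantities and weights are assumptions chosen for clarity.

```python
# Hypothetical shaped reward for a quadruped stand-up (fall recovery) skill.
# The agent scores higher the more upright and elevated its torso is,
# and takes a penalty when it topples, so trial and error steers the
# policy toward action sequences that recover from a fall.

def recovery_reward(torso_height, upright_cos, fell_over):
    """torso_height: torso height above ground in metres;
    upright_cos: cosine of torso tilt angle (1.0 = fully upright);
    fell_over: True if the robot toppled during this step."""
    reward = 2.0 * upright_cos + 1.0 * torso_height  # reward upright posture
    if fell_over:
        reward -= 5.0  # penalty discourages the failing action sequence
    return reward
```

During training, the policy samples many motion sequences and keeps whichever ones accumulate the highest total reward, which is how the robot "learns" which actions are desired.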


Facebook has trained an AI to navigate without needing a map

#artificialintelligence

The algorithm lets robots find the shortest route in unfamiliar environments, opening the door to robots that can work inside homes and offices.

The news: A team at Facebook AI has created a reinforcement learning algorithm that lets a robot find its way in an unfamiliar environment without using a map. Using just a depth-sensing camera, GPS, and compass data, the algorithm gets a robot to its goal 99.9% of the time along a route that is very close to the shortest possible path, which means no wrong turns, no backtracking, and no exploration. This is a big improvement over previous best efforts.

Why it matters: Mapless route-finding is essential for next-gen robots like autonomous delivery drones or robots that work inside homes and offices.
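Claims like "very close to the shortest possible path" in point-goal navigation research are commonly scored with SPL (Success weighted by Path Length), which weights each successful episode by the ratio of the shortest path to the path actually taken. A minimal implementation of that standard metric (the article does not name it, so treat its use here as an assumption):

```python
# SPL (Success weighted by Path Length): the standard metric for
# point-goal navigation. For each episode it multiplies success (0 or 1)
# by shortest_path / max(actual_path, shortest_path), then averages.

def spl(successes, shortest_lengths, actual_lengths):
    """successes: 1 if the agent reached the goal, else 0;
    shortest_lengths: geodesic shortest-path length per episode;
    actual_lengths: length of the path the agent actually travelled."""
    total = 0.0
    for s, shortest, actual in zip(successes, shortest_lengths, actual_lengths):
        total += s * (shortest / max(actual, shortest))
    return total / len(successes)
```

An agent that always succeeds along the exact shortest path scores 1.0; wrong turns and backtracking lengthen the actual path and pull the score down, so a score near 1.0 implies near-optimal routes.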